    Introduction to the special section on dependable network computing

    Dependable network computing is becoming a key part of our daily economic and social life. Every day, millions of users and businesses are utilizing the Internet infrastructure for real-time electronic commerce transactions, scheduling important events, and building relationships. While network traffic and the number of users are growing rapidly, the mean time between failures (MTTF) is surprisingly short: according to recent studies, the MTTF for the majority of Internet backbone paths is 28 days. This leads to a strong requirement for highly dependable networks, servers, and software systems. The challenge is to build interconnected systems, based on available technology, that are inexpensive, accessible, scalable, and dependable. This special section provides insights into a number of these exciting challenges.
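
    To see why a 28-day MTTF is alarming, it helps to put it next to a repair time: availability is MTTF / (MTTF + MTTR). A quick illustration in Python, with MTTR values chosen purely for the example:

```python
# Availability from MTTF and MTTR; the MTTR values are assumptions
# chosen to illustrate why fast repair/failover matters.
mttf_hours = 28 * 24  # the 28-day MTTF reported for Internet backbone paths

for mttr_hours in (0.1, 1.0, 12.0):
    availability = mttf_hours / (mttf_hours + mttr_hours)
    print(f"MTTR {mttr_hours:5.1f} h -> availability {availability:.5f}")
```

    At that failure rate, even a one-hour mean repair time keeps availability below three nines, which is why fast recovery matters as much as failure avoidance.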

    MAGE: Nearly Zero-Cost Virtual Memory for Secure Computation

    Secure Computation (SC) is a family of cryptographic primitives for computing on encrypted data in single-party and multi-party settings. SC is being increasingly adopted by industry for a variety of applications. A significant obstacle to using SC for practical applications is the memory overhead of the underlying cryptography. We develop MAGE, an execution engine for SC that efficiently runs SC computations that do not fit in memory. We observe that, due to their intended security guarantees, SC schemes are inherently oblivious: their memory access patterns are independent of the input data. Using this property, MAGE calculates the memory access pattern ahead of time and uses it to produce a memory management plan. This formulation of memory management, which we call memory programming, is a generalization of paging that allows MAGE to provide a highly efficient virtual memory abstraction for SC. MAGE outperforms the OS virtual memory system by up to an order of magnitude, and in many cases runs SC computations that do not fit in memory at nearly the same speed as if the underlying machines had unbounded physical memory to fit the entire computation. (19 pages; accepted to OSDI 2021.)
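
    Because an oblivious computation's access trace is known before execution, a planner can apply Belady's furthest-future-use eviction rule offline. Below is a minimal Python sketch of that idea; the function, trace, and plan format are illustrative assumptions, not MAGE's actual interface:

```python
# Sketch of "memory programming": given a fully known access trace, compute
# an eviction/load plan offline using Belady's furthest-future-use policy.
def plan_memory(trace, capacity):
    """Return (step, evicted_page, loaded_page) actions for a known trace."""
    # Precompute, for each access, when the same page is next used.
    next_use = [float("inf")] * len(trace)
    last_seen = {}
    for i in range(len(trace) - 1, -1, -1):
        next_use[i] = last_seen.get(trace[i], float("inf"))
        last_seen[trace[i]] = i

    resident = {}  # page -> index of its next use
    plan = []
    for i, page in enumerate(trace):
        if page in resident:
            resident[page] = next_use[i]
            continue
        evicted = None
        if len(resident) >= capacity:
            # Evict the resident page whose next use is furthest away.
            evicted = max(resident, key=resident.get)
            del resident[evicted]
        resident[page] = next_use[i]
        plan.append((i, evicted, page))
    return plan

print(plan_memory(["A", "B", "C", "A", "D", "B"], capacity=2))
```

    The key point is that all of this work happens before the computation runs, so the runtime only has to execute a precomputed sequence of loads and evictions instead of paging reactively.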

    An analysis of a large scale habitat monitoring application

    Habitat and environmental monitoring is a driving application for wireless sensor networks. We present an analysis of data from a second-generation sensor network deployed during the summer and autumn of 2003. During the 4-month deployment, these networks, consisting of 150 devices, produced unique datasets for both systems and biological analysis. This paper focuses on nodal and network performance, with an emphasis on lifetime, reliability, and the static and dynamic aspects of single- and multi-hop networks. We compare the results collected to expectations set during the design phase: we were able to accurately predict the lifetime of the single-hop network, but we underestimated the impact of multi-hop traffic overhearing and the nuances of power source selection. While initial packet loss data was commensurate with lab experiments, over the duration of the deployment the reliability of the backend infrastructure and the transit network had a dominant impact on overall network performance. Finally, we evaluate the physical design of the sensor node based on deployment experience and a post-mortem analysis. The results shed light on a number of design issues, from network deployment through selection of power sources to optimization of routing decisions.
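
    Lifetime predictions of this kind typically rest on a simple power budget: battery capacity divided by the duty-cycle-weighted average current. A minimal sketch with illustrative numbers (none are taken from the deployment):

```python
# Illustrative power-budget lifetime estimate for a duty-cycled sensor node.
# All values are assumptions for the example, not figures from the paper.
battery_mah = 2500.0   # e.g., two AA cells
active_ma = 20.0       # radio + MCU draw while sampling/transmitting
sleep_ma = 0.030       # deep-sleep draw
duty_cycle = 0.02      # fraction of time the node is active

avg_ma = duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma
lifetime_days = battery_mah / avg_ma / 24.0
print(f"average draw: {avg_ma:.3f} mA, expected lifetime: {lifetime_days:.0f} days")
```

    Overhearing in a multi-hop network effectively inflates the active fraction, which is one way a prediction of this kind can miss, as the deployment showed.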

    Demo: An Interoperability Development and Performance Diagnosis Environment

    Interoperability is key to widespread adoption of sensor network technology, but interoperable systems have traditionally been difficult to develop and test. We demonstrate an interoperable system development and performance diagnosis environment in which different systems, different software, and different hardware can be simulated in a single network configuration. This enables development, verification, and performance diagnosis of interoperable systems. Estimating performance is important because, even when systems interoperate, performance can be sub-optimal, as shown in our companion paper, which has been conditionally accepted for SenSys 2011.

    10 Years Later: Cloud Computing is Closing the Performance Gap

    Can cloud computing infrastructures provide HPC-competitive performance for scientific applications broadly? Despite a wealth of related literature, this question remains open. Answers are crucial for designing future systems and democratizing high-performance computing. We present a multi-level approach to investigate the performance gap between HPC and cloud computing, isolating the different variables that contribute to this gap. Our experiments are divided into (i) hardware and system microbenchmarks and (ii) user application proxies. The results show that today's high-end cloud computing can deliver HPC-competitive performance not only for computationally intensive applications but also for memory- and communication-intensive applications, at least at modest scales, thanks to the high-speed memory systems and interconnects and dedicated batch scheduling now available on some cloud platforms.
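
    A microbenchmark of the kind in tier (i) can be as simple as a STREAM-style triad that estimates sustained memory bandwidth. A rough Python/NumPy sketch (the array size is arbitrary, and NumPy's temporary array adds some traffic beyond the three words per element counted here):

```python
# STREAM-style "triad" memory-bandwidth microbenchmark (illustrative sketch).
import time
import numpy as np

n = 50_000_000                  # ~400 MB per float64 array
b = np.random.rand(n)
c = np.random.rand(n)
a = np.empty_like(b)
scalar = 3.0

t0 = time.perf_counter()
a[:] = b + scalar * c           # triad: two loads and one store per element
dt = time.perf_counter() - t0

bytes_moved = 3 * n * 8         # read b, read c, write a (float64)
print(f"approximate triad bandwidth: {bytes_moved / dt / 1e9:.1f} GB/s")
```

    Running the same probe on an HPC node and a cloud instance isolates the memory-system contribution to the gap before any application-level effects enter the picture.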

    Dynamic Resource Management in a Static Network Operating System

    We present novel approaches to managing three key resources in an event-driven sensornet OS: memory, energy, and peripherals. We describe the factors that necessitate these new approaches rather than existing ones. A combination of static allocation and compile-time virtualization isolates resources from one another, while dynamic management provides the flexibility and sharing needed to minimize worst-case overheads. We evaluate the effectiveness and efficiency of these management policies in comparison to those of TinyOS 1.x, SOS, MOS, and Contiki. We show that by making memory, energy, and peripherals first-class abstractions, an OS can quickly, efficiently, and accurately adjust itself to the lowest possible power state, enable high-performance applications when active, prevent memory corruption with little RAM overhead, and remain flexible enough to support a broad range of devices and uses.
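
    The "lowest possible power state" behavior can be pictured as the OS deriving the deepest sleep state compatible with every peripheral currently held. A hypothetical sketch (the states and per-peripheral requirements are invented for illustration, not taken from the paper):

```python
# Hypothetical sketch: derive the deepest MCU sleep state permitted by the
# set of currently held peripherals, in the spirit of treating energy and
# peripherals as first-class abstractions.

# Sleep states, deepest (cheapest) first.
STATES = ["power_down", "standby", "idle", "active"]

# Shallowest state each peripheral requires while it is held.
REQUIRES = {
    "timer": "standby",  # a low-frequency timer keeps running in standby
    "uart": "idle",      # the UART needs peripheral clocks
    "adc": "active",     # a conversion in progress needs the full clock
}

def lowest_power_state(held):
    """Return the deepest state satisfying every held peripheral."""
    floor = 0
    for peripheral in held:
        floor = max(floor, STATES.index(REQUIRES[peripheral]))
    return STATES[floor]

print(lowest_power_state(set()))              # power_down
print(lowest_power_state({"timer"}))          # standby
print(lowest_power_state({"timer", "uart"}))  # idle
```

    Because resource ownership is tracked explicitly, the OS can recompute this floor whenever a peripheral is acquired or released, rather than relying on applications to request sleep states correctly.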